Time series, sequences of observations in chronological order, are essential data in statistical research with many forecasting applications. Although recent Transformer-based models have achieved notable performance, long multi-horizon time series forecasting remains a very challenging task. Going beyond Transformers in sequence translation and transduction research, we observe that down- and up-sampling can nudge temporal saliency patterns to emerge in time sequences. Motivated by this observation, we propose a novel architecture, Temporal Saliency Detection (TSD), built on top of the attention mechanism, and apply it to multi-horizon time series prediction. We renovate the traditional encoder-decoder architecture into a series of deep convolutional blocks working in tandem with multi-head self-attention. The proposed TSD approach extracts multiresolution saliency patterns from the condensed multi-head representations, progressively enhancing complex time series forecasting. Experimental results show that our approach significantly outperforms existing state-of-the-art methods across multiple standard benchmark datasets in many far-horizon forecasting settings. Overall, TSD achieves 31% and 46% relative improvement over the current state-of-the-art models in multivariate and univariate time series forecasting scenarios on standard benchmarks. The Git repository is available at https://github.com/duongtrung/time-series-temporal-saliency-patterns.
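As a rough illustration of the down-and-up-sampling idea described above, the sketch below pairs a convolutional down-sampling block with multi-head self-attention over the condensed sequence and an up-sampling step back to the input resolution. Module names and dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SaliencyAttentionBlock(nn.Module):
    """Condense the time axis with a conv block, attend over the condensed
    sequence, then up-sample back to the input resolution (illustrative only)."""
    def __init__(self, d_model=64, n_heads=4, pool=2):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.GELU(),
            nn.MaxPool1d(pool),                              # condense the time axis
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.up = nn.Upsample(scale_factor=pool, mode="linear", align_corners=False)

    def forward(self, x):                                    # x: (batch, time, d_model)
        z = self.down(x.transpose(1, 2))                     # (batch, d_model, time/pool)
        z = z.transpose(1, 2)                                # (batch, time/pool, d_model)
        z, _ = self.attn(z, z, z)                            # saliency via self-attention
        z = self.up(z.transpose(1, 2)).transpose(1, 2)       # restore input resolution
        return x + z                                         # residual; assumes time % pool == 0
```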
Machine learning (ML) has contributed significantly to the development of bioprocess engineering, but its application is still limited, hampering the enormous potential of bioprocess automation. ML for model-building automation can be seen as a way of introducing another level of abstraction, allowing human experts to focus on the most cognitive tasks of bioprocess development. First, probabilistic programming is used for the automated construction of predictive models. Second, machine learning automatically assesses alternative decisions by planning experiments to test hypotheses and by conducting investigations that gather informative data for model selection based on the uncertainty of model predictions. This review provides a comprehensive overview of ML-based automation in bioprocess development. On the one hand, the biotechnology and bioengineering community should be aware of the limitations of existing ML solutions when applied in biotechnology and biopharmaceuticals. On the other hand, the missing links must be identified so that ML and artificial intelligence (AI) solutions can be easily implemented in solutions of value to the bio-community. We summarize ML implementations for several important bioprocessing systems and propose two crucial challenges that remain bottlenecks for automating biotechnology and reducing the uncertainty of bioprocess development. There is no one-fits-all procedure; however, this review should help identify the potential for automation that combines the fields of biotechnology and ML.
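As a toy illustration of the uncertainty-driven experiment planning mentioned above, the sketch below queries the candidate condition where a predictive model is least certain; the function `predict_with_uncertainty` is a hypothetical stand-in returning a (mean, standard deviation) pair and is not part of the reviewed work.

```python
import numpy as np

def next_experiment(candidates, predict_with_uncertainty):
    """Pick the candidate condition where the model is least certain.
    `predict_with_uncertainty` is a hypothetical callable returning (mean, std)."""
    stds = np.array([predict_with_uncertainty(c)[1] for c in candidates])
    return candidates[int(np.argmax(stds))]
```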
Time series data is ubiquitous in research as well as in a wide variety of industrial applications. Effectively analyzing the available historical data and providing insights into the far future allows us to make effective decisions. Recent research has witnessed the superior performance of Transformer-based architectures, especially in the regime of far-horizon time series forecasting. However, the current state-of-the-art sparse Transformer architectures fail to couple their down- and up-sampling procedures to produce outputs at a resolution similar to the input. We propose the Yformer model, based on a novel Y-shaped encoder-decoder architecture that (1) uses direct connections from the downscaled encoder layers to the corresponding upsampled decoder layers in a U-Net-inspired architecture, (2) combines downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks by adding an auxiliary reconstruction loss. Extensive experiments have been conducted on four benchmark datasets against relevant baselines, showing average improvements in MAE of 19.82 and 18.41 percent, and of 13.62 and 11.85 percent, over the current state of the art in the univariate and multivariate settings.
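A minimal sketch of the Y-shaped encoder-decoder idea follows, with U-Net style skip connections from downscaled encoder levels to the corresponding upsampled decoder levels and an auxiliary reconstruction head. All layer choices and sizes are assumptions, and the sparse attention component is omitted for brevity.

```python
import torch
import torch.nn as nn

class YShapedForecaster(nn.Module):
    """Toy Y-shaped encoder-decoder: encoder levels down-sample the history,
    decoder levels up-sample and receive skip connections from the encoder.
    Names and sizes are assumptions, not the authors' implementation."""
    def __init__(self, d_model=32, levels=2):
        super().__init__()
        self.enc = nn.ModuleList(
            nn.Conv1d(d_model, d_model, 3, stride=2, padding=1) for _ in range(levels)
        )
        self.bottleneck = nn.Conv1d(d_model, d_model, 3, padding=1)
        self.dec = nn.ModuleList(
            nn.ConvTranspose1d(2 * d_model, d_model, 4, stride=2, padding=1)
            for _ in range(levels)
        )
        self.head = nn.Conv1d(d_model, 1, 1)    # point forecast per time step
        self.recon = nn.Conv1d(d_model, 1, 1)   # head for the auxiliary reconstruction loss

    def forward(self, x):                        # x: (batch, d_model, time)
        skips = []
        for layer in self.enc:
            x = torch.relu(layer(x))
            skips.append(x)                      # keep each downscaled resolution
        y = torch.relu(self.bottleneck(x))
        for layer, skip in zip(self.dec, reversed(skips)):
            y = torch.relu(layer(torch.cat([y, skip], dim=1)))   # U-Net style skip
        return self.head(y), self.recon(y)       # forecast + reconstruction target
```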
Recognizing handwriting images is challenging due to the vast variation in writing styles across people and the distinct linguistic aspects of written languages. In Vietnamese, besides the modern Latin characters, there are accents and letter marks together with characters that confuse state-of-the-art handwriting recognition methods. Moreover, as a low-resource language, Vietnamese has few datasets for handwriting recognition research, which creates a barrier for researchers approaching the problem. Recent works evaluated offline handwriting recognition methods in Vietnamese using images from an online handwriting dataset, constructed by connecting pen-stroke coordinates without further processing. This approach clearly cannot measure the ability of recognition methods effectively, as it is trivial and may lack features that are essential in offline handwriting images. Therefore, in this paper, we propose the Transferring method to construct a handwriting image dataset that carries the crucial natural attributes required of offline handwriting images. Using our method, we provide the first high-quality synthetic dataset that is complex and natural enough for efficiently evaluating handwriting recognition methods. In addition, we conduct experiments with various state-of-the-art methods to identify the challenges that remain on the way to a solution for handwriting recognition in Vietnamese.
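For context, the sketch below shows the naive online-to-offline conversion criticized above: drawing straight line segments between consecutive pen-stroke coordinates with no further processing. It is illustrative only; the proposed Transferring method is precisely about adding the natural attributes this baseline lacks.

```python
import numpy as np
from PIL import Image, ImageDraw

def strokes_to_image(strokes, size=(400, 120), line_width=2):
    """Naive conversion: draw line segments between consecutive pen-stroke
    coordinates, with no ink texture, pressure, or background modelling."""
    img = Image.new("L", size, color=255)         # white grayscale canvas
    draw = ImageDraw.Draw(img)
    for stroke in strokes:                        # stroke: list of (x, y) points
        pts = [(float(x), float(y)) for x, y in stroke]
        if len(pts) > 1:
            draw.line(pts, fill=0, width=line_width)
    return np.array(img)
```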
Image captioning is currently a challenging task that requires the ability both to understand visual information and to use human language to describe this visual information in the image. In this paper, we propose an efficient way to improve the image understanding ability of transformer-based methods by extending the Object Relation Transformer architecture with the Attention on Attention mechanism. Experiments on the VieCap4H dataset show that our proposed method significantly outperforms its original structure on both the public test and the private test of the Image Captioning shared task held by VLSP.
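A minimal sketch of the general Attention-on-Attention idea referenced above: the attended vector is gated by an information/gate pair computed from the query and the attention result. Layer sizes and the use of `nn.MultiheadAttention` are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AttentionOnAttention(nn.Module):
    """Gate the attended vector with an information/gate pair that is computed
    from the query and the attention result (sketch of the general AoA idea)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.info = nn.Linear(2 * d_model, d_model)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, query, key, value):          # each: (batch, seq, d_model)
        attended, _ = self.attn(query, key, value)
        joint = torch.cat([query, attended], dim=-1)
        return torch.sigmoid(self.gate(joint)) * self.info(joint)
```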
In this paper, we present a robust and low-complexity deep learning model for Remote Sensing Image Classification (RSIC), the task of identifying the scene of a remote sensing image. In particular, we first evaluate several low-complexity benchmark deep neural networks: MobileNetV1, MobileNetV2, NASNetMobile, and EfficientNetB0, each with fewer than 5 million (M) trainable parameters. After identifying the best network architecture, we further improve performance by applying attention schemes to multiple feature maps extracted from the network's middle layers. To address the larger model footprint caused by these attention schemes, we apply quantization to keep the number of trainable parameters below 5 M. Through extensive experiments on the benchmark dataset NWPU-RESISC45, we achieve a robust and low-complexity model that is highly competitive with state-of-the-art systems and suitable for real-life applications on edge devices.
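One common attention scheme that could be applied to a middle-layer feature map is squeeze-and-excitation style channel attention, sketched below; the paper's exact attention and quantization choices may differ.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention over a feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (batch, channels, h, w)
        w = x.mean(dim=(2, 3))                     # "squeeze": global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # "excite": re-weight channels
```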
Predicting fund performance is beneficial to both investors and fund managers, yet it is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance and ensures meaningful comparability across funds. We calculated annualised Sharpe ratios from monthly return time series for more than 600 open-ended mutual funds investing in listed large-cap US equities. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, trained with modern Bayesian optimization, predict funds' Sharpe ratios more accurately than traditional statistical approaches. An ensemble method that combines the forecasts of the LSTM and GRU achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance forecasting.
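For reference, a minimal sketch of the annualised Sharpe ratio computed from monthly returns (mean excess return over its standard deviation, scaled by the square root of 12); the handling of the risk-free rate here is an assumption, and the paper's exact computation may differ.

```python
import numpy as np

def annualised_sharpe(monthly_returns, risk_free_monthly=0.0):
    """Annualised Sharpe ratio from monthly returns: mean excess return over
    its standard deviation, scaled by sqrt(12)."""
    excess = np.asarray(monthly_returns, dtype=float) - risk_free_monthly
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)
```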
As deep learning models gradually become the main workhorse of time series forecasting, the potential vulnerability of forecasting and decision systems to adversarial attacks has become a major concern in recent years. Although such behaviors and defense mechanisms have started to be investigated for univariate time series forecasting, there are still few studies on multivariate forecasting, which is often preferred for its ability to encode correlations between different time series. In this work, we study and design adversarial attacks on multivariate probabilistic forecasting models, taking into account attack budget constraints and the correlation structure between multiple time series. Specifically, we investigate a sparse indirect attack that hurts the prediction of one item (time series) by attacking the history of only a small number of other items, thereby saving attack cost. To combat these attacks, we also develop two defense strategies. First, we adapt randomized smoothing to the multivariate time series setting and verify its effectiveness through empirical experiments. Second, we leverage a sparse attacker to enable end-to-end adversarial training that delivers robust probabilistic forecasters. Extensive experiments on real datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than other baseline defense mechanisms.
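A minimal sketch of the randomized-smoothing style defense for a multivariate forecaster: average the model's outputs over Gaussian perturbations of the input history. Parameter names and the aggregation rule are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def smoothed_forecast(model, history, sigma=0.1, n_samples=32):
    """Average the forecaster's outputs over Gaussian perturbations of the
    input history (randomized-smoothing style defense, illustrative only)."""
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *history.shape, device=history.device)
        noisy = history.unsqueeze(0) + noise            # (n_samples, items, time)
        preds = torch.stack([model(x) for x in noisy])  # one forecast per noise draw
        return preds.mean(dim=0)                        # smoothed prediction
```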
Lattice vibration frequencies are related to many important material properties, such as thermal and electrical conductivity as well as superconductivity. However, computing vibrational frequencies with density functional theory (DFT) methods is too computationally demanding for the large number of samples required in materials screening. Here, we propose a deep graph neural network-based algorithm for predicting crystal vibrational frequencies from crystal structures with high accuracy. Our algorithm uses a zero-padding scheme to handle the variable dimension of the vibration-frequency spectrum. Benchmark studies on two datasets with 15,000 and 35,552 samples show that aggregated $R^2$ scores of 0.554 and 0.724 are achieved, respectively. Our work demonstrates the capability of deep graph neural networks to learn the phonon spectrum properties of crystal structures, in addition to the phonon density of states (DOS) and the electronic DOS, for which the output dimension is constant.
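A minimal sketch of the zero-padding scheme for variable-length targets: pad each crystal's vibration-frequency vector to a fixed length so the network can regress a constant-size output. The sorting step and the `max_len` choice are assumptions for illustration.

```python
import numpy as np

def zero_pad_spectrum(frequencies, max_len):
    """Pad a crystal's variable-length vibration-frequency vector with zeros to
    a fixed length so the network can regress a constant-size output."""
    freqs = np.sort(np.asarray(frequencies, dtype=np.float32))  # sorting is an assumption
    n = min(len(freqs), max_len)
    padded = np.zeros(max_len, dtype=np.float32)
    padded[:n] = freqs[:n]
    return padded
```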
In federated learning problems, data is scattered across different servers and exchanging or pooling it is often impractical or prohibited. We develop a Bayesian nonparametric framework for federated learning with neural networks. Each data server is assumed to provide local neural network weights, which are modeled through our framework. We then develop an inference approach that allows us to synthesize a more expressive global network without additional supervision or data pooling, and with as few as a single communication round. We demonstrate the efficacy of our approach on federated learning problems simulated from two popular image classification datasets.
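As a heavily simplified stand-in for the Bayesian nonparametric matching described above, the sketch below aligns a server's hidden units to global ones by cosine similarity and averages the matched weight vectors; the actual framework models weights probabilistically, so this only conveys the intuition of building a global network from permuted local neurons.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_average(global_w, local_w):
    """Align a server's hidden units (rows of local_w) to the global ones by
    cosine similarity and average the matched weight vectors. A deliberately
    crude stand-in for the paper's probabilistic matching."""
    g = global_w / np.linalg.norm(global_w, axis=1, keepdims=True)
    l = local_w / np.linalg.norm(local_w, axis=1, keepdims=True)
    rows, cols = linear_sum_assignment(-g @ l.T)   # maximize total cosine similarity
    merged = global_w.copy()
    merged[rows] = 0.5 * (global_w[rows] + local_w[cols])
    return merged
```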